In computing, telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for error control in data transmission over unreliable or noisy communication channels.
The central idea is that the sender encodes the message in a redundant way, most often by using an error correction code, or error correcting code (ECC). The redundancy allows the receiver not only to detect errors that may occur anywhere in the message, but often also to correct a limited number of them. Therefore a reverse channel to request re-transmission may not be needed. The cost is a fixed, higher forward channel bandwidth.
The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming (7,4) code.
FEC can be applied in situations where re-transmissions are costly or impossible, such as one-way communication links or when transmitting to multiple receivers in multicast.
Long-latency connections also benefit; in the case of satellites orbiting distant planets, retransmission due to errors would create a delay of several hours. FEC is also widely used in modems and in cellular networks.
FEC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, FEC is an integral part of the initial analog-to-digital conversion in the receiver. The Viterbi decoder implements a soft-decision algorithm to demodulate digital data from an analog signal corrupted by noise. Many FEC decoders can also generate a bit-error rate (BER) signal which can be used as feedback to fine-tune the analog receiving electronics.
FEC information is added to mass storage (magnetic, optical and solid state/flash based) devices to enable recovery of corrupted data, and is used in ECC computer memory on systems that require special provisions for reliability.
The maximum proportion of errors or missing bits that can be corrected is determined by the design of the ECC, so different forward error correcting codes are suitable for different conditions. In general, a stronger code induces more redundancy that needs to be transmitted using the available bandwidth, which reduces the effective bit-rate while improving the received effective signal-to-noise ratio. The noisy-channel coding theorem of Claude Shannon can be used to compute the maximum achievable communication bandwidth for a given maximum acceptable error probability. This establishes bounds on the theoretical maximum information transfer rate of a channel with some given base noise level. However, the proof is not constructive, and hence gives no insight into how to build a capacity-achieving code. After years of research, some advanced FEC systems, such as polar codes, come very close to the theoretical maximum given by the Shannon channel capacity under the hypothesis of an infinite-length frame.
A simplistic example of ECC is to transmit each data bit three times, which is known as a (3,1) repetition code. Through a noisy channel, a receiver might see any of eight possible three-bit values; the table below shows how each is interpreted, and a short code sketch follows it.
| Triplet received | Interpreted as |
|---|---|
| 000 | 0 (error-free) |
| 001 | 0 |
| 010 | 0 |
| 100 | 0 |
| 111 | 1 (error-free) |
| 110 | 1 |
| 101 | 1 |
| 011 | 1 |
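To make the scheme concrete, here is a minimal Python sketch of a (3,1) repetition encoder and majority-vote decoder (the function names are ours, for illustration only):

```python
def encode_repetition(bits, n=3):
    """Encode each data bit by repeating it n times (the (3,1) code for n=3)."""
    return [b for bit in bits for b in [bit] * n]

def decode_repetition(received, n=3):
    """Decode by majority vote over each group of n received bits."""
    decoded = []
    for i in range(0, len(received), n):
        group = received[i:i + n]
        decoded.append(1 if sum(group) > n // 2 else 0)
    return decoded

# One flipped bit per triplet is corrected, as in the table above:
codeword = encode_repetition([1, 0])   # [1, 1, 1, 0, 0, 0]
corrupted = [1, 0, 1, 0, 1, 0]         # one error in each triplet
assert decode_repetition(corrupted) == [1, 0]
```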
Most telecommunication systems use a fixed channel code designed to tolerate the expected worst-case bit error rate, and then fail to work at all if the bit error rate is ever worse. However, some systems adapt to the given channel error conditions: some instances of hybrid automatic repeat-request use a fixed ECC method as long as the ECC can handle the error rate, then switch to ARQ when the error rate gets too high; adaptive modulation and coding uses a variety of ECC rates, adding more error-correction bits per packet when there are higher error rates in the channel, or taking them out when they are not needed.
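As a hedged sketch of the adaptive idea only (the thresholds and available rates below are invented for illustration and do not come from any particular standard):

```python
def select_code_rate(estimated_ber):
    """Choose a code rate from an estimate of the raw channel bit error
    rate. Thresholds and rates are illustrative assumptions only."""
    if estimated_ber < 1e-6:
        return 1.0     # clean channel: no error-correction overhead
    elif estimated_ber < 1e-4:
        return 3 / 4   # light protection
    elif estimated_ber < 1e-2:
        return 1 / 2   # strong protection, half the throughput
    else:
        return None    # too noisy for this ECC family: fall back to ARQ
```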
There are many types of block codes; Reed–Solomon coding is noteworthy for its widespread use in compact discs, DVDs, and hard disk drives. Other examples of classical block codes include Golay, BCH, multidimensional parity, and Hamming codes.
Hamming ECC is commonly used to correct errors in ECC memory and early SLC NAND flash memory ("Hamming codes for NAND flash memory devices", EE Times-Asia, apparently based on Micron Technical Note TN-29-08, 2005, both of which state: "The Hamming algorithm is an industry-accepted method for error detection and correction in many SLC NAND flash-based applications"). It provides single-bit error correction and 2-bit error detection.
Hamming codes are only suitable for more reliable single-level cell (SLC) NAND.
Denser multi-level cell (MLC) NAND may use multi-bit correcting ECC such as BCH, Reed–Solomon, or LDPC codes. NOR flash typically does not use any error correction.
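As an illustration of such a block code, here is a minimal sketch of Hamming(7,4) encoding and single-error correction over GF(2); the systematic generator/parity-check layout below is one common choice among several equivalent bit orderings:

```python
import numpy as np

# Hamming(7,4) in systematic form: codeword = [d1 d2 d3 d4 p1 p2 p3].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def hamming_encode(data4):
    """Multiply the 4 data bits by the generator matrix over GF(2)."""
    return (np.array(data4) @ G) % 2

def hamming_correct(received7):
    """Correct up to one flipped bit: a nonzero syndrome equals the
    column of H at the error position."""
    r = np.array(received7).copy()
    syndrome = (H @ r) % 2
    if syndrome.any():
        for i in range(7):
            if np.array_equal(H[:, i], syndrome):
                r[i] ^= 1
                break
    return r

codeword = hamming_encode([1, 0, 1, 1])
corrupted = codeword.copy()
corrupted[2] ^= 1                        # flip one bit
assert np.array_equal(hamming_correct(corrupted), codeword)
```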
Classical block codes are usually decoded using hard-decision algorithms, which means that for every input and output signal a hard decision is made whether it corresponds to a one or a zero bit. In contrast, convolutional codes are typically decoded using soft-decision algorithms such as the Viterbi, MAP, or BCJR algorithms, which process (discretized) analog signals and allow for much higher error-correction performance than hard-decision decoding.
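The benefit of soft decisions can be seen even on the (3,1) repetition code from above; a small sketch, assuming BPSK-style reception where bit 0 is sent as +1 and bit 1 as -1:

```python
# One data bit 0 transmitted as +1, +1, +1, received with noise:
samples = [0.9, -0.2, -0.3]

# Hard decision: threshold each sample first, then majority-vote.
hard_bits = [0 if s > 0 else 1 for s in samples]   # [0, 1, 1]
hard_decoded = 1 if sum(hard_bits) >= 2 else 0     # -> 1 (wrong)

# Soft decision: combine the raw amplitudes, threshold only once.
soft_decoded = 0 if sum(samples) > 0 else 1        # 0.4 > 0 -> 0 (correct)
```

The two weakly negative samples outvote the one confident positive sample under hard decisions, while soft decoding weighs each sample by its reliability and decodes correctly.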
Nearly all classical block codes apply the algebraic properties of . Hence classical block codes are often referred to as algebraic codes.
In contrast to classical block codes that often specify an error-detecting or error-correcting ability, many modern block codes such as LDPC codes lack such guarantees. Instead, modern codes are evaluated in terms of their bit error rates.
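A hedged sketch of how such a bit error rate figure can be estimated by Monte Carlo simulation over a binary symmetric channel; `encode` and `decode` stand in for whatever code is under test:

```python
import random

def estimate_ber(encode, decode, k, p, trials=10_000):
    """Estimate the post-decoding bit error rate of a code over a
    binary symmetric channel with crossover probability p."""
    errors = total = 0
    for _ in range(trials):
        data = [random.randint(0, 1) for _ in range(k)]
        received = [b ^ (random.random() < p) for b in encode(data)]
        decoded = decode(received)
        errors += sum(d != r for d, r in zip(data, decoded))
        total += k
    return errors / total
```

For the (3,1) repetition code sketched earlier, estimate_ber(encode_repetition, decode_repetition, k=100, p=0.01) should come out near the analytical value 3p^2(1-p) + p^3, about 3 × 10^-4.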
Most forward error correction codes correct only bit-flips, but not bit-insertions or bit-deletions.
In this setting, the Hamming distance is the appropriate way to measure the bit error rate.
A few forward error correction codes are designed to correct bit-insertions and bit-deletions, such as marker codes and watermark codes.
The Levenshtein distance is a more appropriate way to measure the bit error rate when using such codes.
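To make the distinction concrete, a small sketch computing both metrics (a standard dynamic-programming Levenshtein implementation; nothing here is specific to any particular code):

```python
def hamming_distance(a, b):
    """Positions where two equal-length sequences differ (bit-flips only)."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def levenshtein_distance(a, b):
    """Minimum number of substitutions, insertions, and deletions."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

# A deletion shifts every later bit, inflating the Hamming distance:
print(hamming_distance("1100110", "1001100"))      # 3
print(levenshtein_distance("1100110", "1001100"))  # 2 (delete + insert)
```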
Code-rate and the tradeoff between reliability and data rate

The redundant bits that protect the information have to be transferred using the same communication resources that they are trying to protect. This causes a fundamental tradeoff between reliability and data rate. At one extreme, a strong code (with a low code-rate) can induce a significant increase in effective receiver SNR (signal-to-noise ratio), decreasing the bit error rate at the cost of reducing the effective data rate. At the other extreme, not using any ECC (i.e., a code-rate equal to 1) uses the full channel for information transfer purposes, at the cost of leaving the bits without any additional protection.
One interesting question is the following: how efficient in terms of information transfer can an ECC be that has a negligible decoding error rate? This question was answered by Claude Shannon with his second theorem, which says that the channel capacity is the maximum bit rate achievable by any ECC whose error rate tends to zero. His proof relies on Gaussian random coding, which is not suitable for real-world applications. The upper bound given by Shannon's work inspired a long journey of designing ECCs that can come close to the ultimate performance boundary. Various codes today can attain almost the Shannon limit. However, capacity-achieving ECCs are usually extremely complex to implement.
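For example, for the binary symmetric channel with crossover probability p, the capacity is C = 1 - H(p), where H is the binary entropy function; a short sketch evaluating that bound:

```python
from math import log2

def bsc_capacity(p):
    """Capacity (bits per channel use) of a binary symmetric channel
    with crossover probability p: C = 1 - H(p)."""
    if p in (0.0, 1.0):
        return 1.0
    return 1 + p * log2(p) + (1 - p) * log2(1 - p)

# With 1% of bits flipped, no code of rate above ~0.919 can drive the
# decoding error rate arbitrarily low, no matter how complex it is.
print(bsc_capacity(0.01))   # ~0.9192
```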
The most popular ECCs have a trade-off between performance and computational complexity. Usually, their parameters give a range of possible code rates, which can be optimized depending on the scenario. Usually, this optimization is done in order to achieve a low decoding error probability while minimizing the impact on the data rate. Another criterion for optimizing the code rate is to balance a low error rate against the number of retransmissions, in order to minimize the energy cost of the communication.
Concatenated ECC codes for improved performance

Concatenated codes have been standard practice in satellite and deep-space communications since Voyager 2 first used the technique in its 1986 encounter with Uranus. The Galileo craft used iterative concatenated codes to compensate for the very high error rate conditions caused by its failed antenna.
Low-density parity-check (LDPC)

LDPC codes were first introduced by Robert G. Gallager in his PhD thesis in 1960, but due to the computational effort of implementing the encoder and decoder, and the introduction of Reed–Solomon codes, they were mostly ignored until the 1990s.
LDPC codes are now used in many recent high-speed communication standards, such as DVB-S2 (Digital Video Broadcasting – Satellite – Second Generation), WiMAX (IEEE 802.16e standard for microwave communications), High-Speed Wireless LAN (IEEE 802.11n; see IEEE 802.11n-2009, section 20.3.11.6), 10GBase-T Ethernet (802.3an), and G.hn/G.9960 (ITU-T standard for networking over power lines, phone lines and coaxial cable). Other LDPC codes are standardized for wireless communication standards within 3GPP MBMS (see fountain codes).
Turbo codes

One of the earliest commercial applications of turbo coding was the CDMA2000 1x (TIA IS-2000) digital cellular technology developed by Qualcomm and sold by Verizon Wireless, Sprint Nextel, and other carriers. It is also used for the evolution of CDMA2000 1x specifically for Internet access, 1xEV-DO (TIA IS-856). Like 1x, EV-DO was developed by Qualcomm and is sold by Verizon Wireless, Sprint Nextel, and other carriers (Verizon's marketing name for 1xEV-DO is Broadband Access; Sprint's consumer and business marketing names for 1xEV-DO are Power Vision and Mobile Broadband, respectively).
Local decoding and testing of codes

Locally decodable codes are error-correcting codes for which single bits of the message can be probabilistically recovered by only looking at a small (say, constant) number of positions of a codeword, even after the codeword has been corrupted at some constant fraction of positions. Locally testable codes are error-correcting codes for which it can be checked probabilistically whether a signal is close to a codeword by only looking at a small number of positions of the signal.
Not all locally decodable codes (LDCs) are locally testable codes (LTCs), nor are all LDCs locally correctable codes (LCCs); the length of q-query LCCs is bounded exponentially in the message length.
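As an illustration (not taken from the text above), the Hadamard code is the classic example of local decodability: any message bit can be recovered with just two queries, by XORing the codeword values at a random position and at that position shifted by the corresponding unit vector. A minimal sketch:

```python
import random

def hadamard_encode(msg_bits):
    """Hadamard code: one parity bit <msg, x> for every x in {0,1}^k."""
    k = len(msg_bits)
    code = []
    for x in range(2 ** k):
        parity = 0
        for i in range(k):
            parity ^= msg_bits[i] & ((x >> i) & 1)
        code.append(parity)
    return code

def local_decode_bit(codeword, k, i, votes=31):
    """Recover message bit i from 2 queries per vote; by linearity,
    code[x] XOR code[x ^ (1 << i)] equals bit i whenever neither queried
    position is corrupted, so a majority vote succeeds with high
    probability when well under 1/4 of all positions are corrupted."""
    ones = 0
    for _ in range(votes):
        x = random.randrange(2 ** k)
        ones += codeword[x] ^ codeword[x ^ (1 << i)]
    return 1 if 2 * ones > votes else 0

msg = [1, 0, 1]
cw = hadamard_encode(msg)               # length 2**3 = 8
cw[5] ^= 1                              # corrupt one of the eight positions
print(local_decode_bit(cw, k=3, i=2))   # -> 1 with high probability
```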
Interleaving

The analysis of modern iterated codes, like turbo codes and LDPC codes, typically assumes an independent distribution of errors. Systems using LDPC codes therefore typically employ additional interleaving across the symbols within a code word.

For turbo codes, an interleaver is an integral component and its proper design is crucial for good performance. The iterative decoding algorithm works best when there are no short cycles in the factor graph that represents the decoder; the interleaver is chosen to avoid short cycles.
Interleaver designs include:

- rectangular (or uniform) interleavers
- convolutional interleavers
- random interleavers (using a fixed, known pseudo-random permutation)
- S-random interleavers (random permutations constrained so that input symbols that are close together are spread at least S positions apart in the output)
- contention-free quadratic permutation polynomial (QPP) interleavers
In multi-carrier communication systems, interleaving across carriers may be employed to provide frequency diversity, e.g., to mitigate frequency-selective fading or narrowband interference.
Example

Transmission without interleaving:

    Error-free message:                       aaaabbbbccccddddeeeeffffgggg
    Transmission with a burst error:          aaaabbbbccc____deeeeffffgggg

Here, each group of the same letter represents a 4-bit one-bit error-correcting codeword. The codeword cccc is altered in one bit and can be corrected, but the codeword dddd is altered in three bits, so either it cannot be decoded at all or it might be decoded incorrectly.

With interleaving:

    Error-free code words:                    aaaabbbbccccddddeeeeffffgggg
    Interleaved:                              abcdefgabcdefgabcdefgabcdefg
    Transmission with a burst error:          abcdefgabcd____bcdefgabcdefg
    Received code words after deinterleaving: aa_abbbbccccdddde_eef_ffg_gg

In each of the codewords "aaaa", "eeee", "ffff", and "gggg", only one bit is altered, so a one-bit error-correcting code will decode everything correctly.

In another example, consider transmitting the sentence "ThisIsAnExampleOfInterleaving".

Transmission without interleaving:

    Original transmitted sentence:            ThisIsAnExampleOfInterleaving
    Received sentence with a burst error:     ThisIs______pleOfInterleaving

The term "AnExample" ends up mostly unintelligible and difficult to correct.

With interleaving:

    Transmitted sentence:                     ThisIsAnExampleOfInterleaving...
    Error-free transmission:                  TIEpfeaghsxlIrv.iAaenli.snmOten.
    Received sentence with a burst error:     TIEpfe______Irv.iAaenli.snmOten.
    Received sentence after deinterleaving:   T_isI_AnE_amp_eOfInterle_vin_...

No word is completely lost and the missing letters can be recovered with minimal guesswork.
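A minimal sketch of the rectangular interleaver behind these examples (8 rows of 4 characters for the sentence case; the geometry is an assumption chosen to match the illustration above):

```python
def interleave(text, rows):
    """Write row-by-row into a rows x cols grid, read column-by-column."""
    cols = len(text) // rows
    return "".join(text[r * cols + c] for c in range(cols) for r in range(rows))

def deinterleave(text, rows):
    """Invert interleave(): read the column-major string back row-major."""
    cols = len(text) // rows
    return "".join(text[c * rows + r] for r in range(rows) for c in range(cols))

sentence = "ThisIsAnExampleOfInterleaving..."
sent = interleave(sentence, rows=8)       # "TIEpfeaghsxlIrv.iAaenli.snmOten."
burst = sent[:6] + "______" + sent[12:]   # 6-character burst error
print(deinterleave(burst, rows=8))        # "T_isI_AnE_amp_eOfInterle_vin_..."
```

The burst wipes out six consecutive transmitted characters, but after deinterleaving the losses are spread out as isolated gaps that the surrounding context (or a one-symbol-correcting code) can repair.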
Disadvantages of interleaving

Use of interleaving techniques increases the total delay, because the entire interleaved block must be received before the packets can be decoded. Interleavers also hide the structure of errors; without an interleaver, more advanced decoding algorithms can take advantage of the error structure and achieve more reliable communication than a simpler decoder combined with an interleaver.

Software for error-correcting codes

A variety of open-source software implementations of error-correcting codes are available (the selection is not exhaustive).
List of error-correcting codes

- Parity
- Triple modular redundancy
- Perfect Hamming codes, such as Hamming(7,4)
- Extended Hamming codes
- Nordstrom–Robinson code
- Perfect binary Golay code
- Extended binary Golay code